fix: include incoming message size when checking against batch size #52
kwvg wants to merge 3 commits into vthiery:master from
Conversation
@kwvg The documentation (https://github.com/vthiery/cpp-statsd-client?tab=readme-ov-file#batching) explicitly states that the batch size is not a hard limit, and indeed it can't be without quite a bit of annoyance, e.g. in the case where the batch size is so small that a single message wouldn't fit. So I wouldn't characterize this change as a "fix"; it's really more of an attempt to make the batch size a hard limit, right? But even then, the change here does not make it a hard limit, specifically in the example I mentioned above. I also see that you keep the additional memory reservation when adding a new batch, which makes no sense if we are trying to stick to a hard limit. At any rate, I think we should first discuss the merits of a hard limit vs the practical reality. Perhaps we could throw or something if someone tries to set a batch size that is too small to be reasonable, so as to avoid the pitfall mentioned above? At present I don't feel strongly about a hard limit vs a soft limit; the reason we went with a soft one in the first place was simply that it was quite a bit easier to adhere to logically 😄
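To make the "throw on an unreasonably small batch size" idea concrete, here is a minimal sketch of such a guard. The function name `validateBatchsize`, the threshold, and the assumption that a `batchsize` of 0 disables batching (as the README suggests) are all illustrative, not part of the library:

```cpp
#include <cstdint>
#include <stdexcept>

// Illustrative threshold: roughly the smallest size that fits one metric line.
constexpr uint64_t kMinReasonableBatchsize = 64;

// Hypothetical constructor-time guard along the lines suggested above.
inline void validateBatchsize(uint64_t batchsize) {
    // A batchsize of 0 conventionally disables batching, so only reject
    // non-zero sizes too small to hold a single plausible message.
    if (batchsize != 0 && batchsize < kMinReasonableBatchsize) {
        throw std::invalid_argument("batchsize too small to fit a single message");
    }
}
```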
```cpp
// Either we don't have a place to batch our message or we are about to exceed the batch size, so make a new batch
if (m_batchingMessageQueue.empty() || m_batchingMessageQueue.back().size() + message.size() > m_batchsize) {
    m_batchingMessageQueue.emplace_back();
    m_batchingMessageQueue.back().reserve(m_batchsize + 256);
```
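Judging from the PR title, the pre-change check presumably compared only the accumulated batch size against `m_batchsize`; the following before/after is a reconstruction (the old line is an assumption, not quoted from the repository):

```diff
- if (m_batchingMessageQueue.empty() || m_batchingMessageQueue.back().size() > m_batchsize) {
+ if (m_batchingMessageQueue.empty() || m_batchingMessageQueue.back().size() + message.size() > m_batchsize) {
```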
If we really do want a hard limit, there is no reason to keep the fuzz added to the batch size for overages:
```diff
- m_batchingMessageQueue.back().reserve(m_batchsize + 256);
+ m_batchingMessageQueue.back().reserve(m_batchsize);
```
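Worth noting for the discussion either way: `std::string::reserve` is only a capacity hint and never caps growth, so the `+ 256` fuzz exists purely to avoid a reallocation when a batch overruns the soft limit. A self-contained illustration:

```cpp
#include <cassert>
#include <string>

int main() {
    std::string batch;
    batch.reserve(8);            // capacity hint only, not a cap
    batch.append(64, 'x');       // appending past the reservation is fine
    assert(batch.size() == 64);  // the string reallocates and keeps growing
    return 0;
}
```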
The linting step is broken; we'll have to fix that on our end. You can ignore it for now 😄
Expected behavior

A `StatsdClient` constructed with a `batchsize`: a message that fits within `batchsize` remains in the same batch; a message that would exceed `batchsize` if appended gets put into a new batch.

Actual behavior

A `StatsdClient` constructed with a `batchsize`: a message that fits within `batchsize` remains in the same batch; but a message that would exceed `batchsize` if appended also gets put into the same batch, so a batch can grow past `batchsize`.

Steps to reproduce

(`-`) is used to indicate a batch
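A minimal reproduction sketch, assuming the `(host, port, prefix, batchsize)` constructor ordering and the `increment` method shown in the project README (host, port, prefix, and batch size here are illustrative):

```cpp
#include "StatsdClient.hpp"

int main() {
    // Batch size barely larger than one serialized metric ("test.counter:1|c",
    // 16 bytes) so the boundary behavior of the size check is easy to observe.
    Statsd::StatsdClient client("127.0.0.1", 8125, "test.", 32);

    // Without this PR, a message whose addition pushes the current batch past
    // 32 bytes is still appended to it; with the PR it opens a new batch.
    for (int i = 0; i < 10; ++i) {
        client.increment("counter");
    }
    return 0;
}
```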